feat: auto-detect image aspect ratio from prompt #125
Open
TriTue2011 wants to merge 315 commits into
Conversation
I have pushed two additional commits to this PR:
…GPT_TOKEN_X)" This reverts commit d236a14.
…nd, chat_complete, response)
…rs, search, backup, Vietnamese UI
- BackendRouter: route text/image requests to the best provider (chatgpt, opencode, sdwebui...)
- OpenCode free provider (noAuth): bypasses the ChatGPT 24KB payload limit for HA
- RateLimitBackoff: exponential backoff with 15 levels, per-model locking, error rules
- SearchService: configurable ChatGPT/Gemini/Serper/SearXNG/Brave search backends
- StateBackup: full-system export/import (accounts, providers, config, combos, image tasks)
- Image adapters: 7 providers ported from 9router (SD WebUI, HuggingFace, Cloudflare, Fal.ai, Stability, BFL, Gemini)
- Account service: virtual noAuth connections, health scoring (0.0-1.0)
- API endpoints: /api/v1/backup, /api/v1/restore, /api/v1/backups, /api/v1/health, /api/v1/providers
- UI: Vietnamese dark-mode dashboard, sidebar navigation, providers/combos/search/backup pages
- Config: backends, providers, rate_limit, combo_models, search sections

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
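The RateLimitBackoff described above (exponential backoff, 15 levels, per-model locking) could look roughly like the following sketch. The base delay, the doubling factor, and all method names are assumptions for illustration, not the PR's actual implementation:

```python
import time


class RateLimitBackoff:
    """Hypothetical sketch of a per-model exponential backoff tracker.

    Assumed parameters (not from the PR): levels cap at 15, base delay
    of 1 second, and the delay doubles with each level.
    """

    MAX_LEVEL = 15
    BASE_DELAY = 1.0  # seconds; assumed base

    def __init__(self) -> None:
        self._levels: dict[str, int] = {}    # model -> current backoff level
        self._until: dict[str, float] = {}   # model -> timestamp the lock expires

    def record_error(self, model: str) -> float:
        """Raise the model's backoff level and return the new delay."""
        level = min(self._levels.get(model, 0) + 1, self.MAX_LEVEL)
        self._levels[model] = level
        delay = self.BASE_DELAY * (2 ** (level - 1))
        self._until[model] = time.monotonic() + delay
        return delay

    def record_success(self, model: str) -> None:
        """Reset the model's backoff after a successful call."""
        self._levels.pop(model, None)
        self._until.pop(model, None)

    def is_locked(self, model: str) -> bool:
        """True while the model is still inside its backoff window."""
        return time.monotonic() < self._until.get(model, 0.0)
```

A router could check `is_locked()` before dispatching and fall back to another provider while a model is cooling down.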
- GitHub Actions: build on main push + version tags, linux/amd64 + linux/arm64
- docker-compose.yml: healthcheck, named volume, clear env vars for Portainer
- Tags: latest (main), sha-XXXX, vX.Y.Z (semver)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- BackendRouter: resolve 'auto' → provider default model (nemotron-3-super-free for OpenCode)
- New 9router backup import: extract codex/ChatGPT tokens from 9router backup JSON
- New endpoint: POST /api/v1/import-9router — import tokens from a 9router backup file
- Supports .json and .json.gz backup files
- Also extracts combo models and merges them into the config

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- 9r/ model prefix proxies chat to 9router (handles OAuth: Claude/Codex/Copilot)
- ha-agent combo: 9router → OpenCode → ChatGPT (best fallback chain)
- 9router auto-connection from backup import
- Config: ninerouter.base_url + api_key section
- No 24KB limit when using 9r/ or oc/ models

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- cx/ prefix uses Codex OAuth tokens to call api.openai.com directly
- No proxy to 9router needed — tokens from backup work immediately
- Gemini AI Studio provider (free 15 RPM or paid API key)
- 9router backup import marks tokens as type=codex
- add_accounts_with_type() for OAuth tokens
- ha-agent combo: cx/auto → oc/auto → chatgpt/auto

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
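The model prefixes these commits introduce (9r/, oc/, cx/) plus the 'auto' resolution can be sketched together. Only the OpenCode default (nemotron-3-super-free) and the Codex default (gpt-5.3-codex, from a later commit in this PR) come from the commit messages; the fallback provider name and the function shape are assumptions:

```python
# Prefix -> provider mapping, per the commit messages in this PR.
PREFIX_PROVIDERS = {"9r": "ninerouter", "oc": "opencode", "cx": "codex"}

# Per-provider 'auto' defaults; only these two entries are stated in the PR.
AUTO_DEFAULTS = {
    "opencode": "nemotron-3-super-free",
    "codex": "gpt-5.3-codex",
}


def route_model(model: str) -> tuple[str, str]:
    """Return (provider, resolved_model) for a possibly prefixed model name."""
    provider = "chatgpt"  # assumed fallback provider for unprefixed models
    if "/" in model:
        prefix, rest = model.split("/", 1)
        if prefix in PREFIX_PROVIDERS:
            provider, model = PREFIX_PROVIDERS[prefix], rest
    if model == "auto":
        # Resolve 'auto' to the provider's default model when one is known.
        model = AUTO_DEFAULTS.get(provider, model)
    return provider, model
```

So `oc/auto` would route to OpenCode with its default model, while a bare model name falls through to the default provider untouched.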
- Add OAuth token import in the Accounts page (type=codex)
- File upload (drag-and-drop) for 9router backup on the Backup page
- POST /api/v1/import-9router-upload for direct JSON upload
- POST /api/accounts/oauth for adding OAuth tokens
- Gemini AI Studio provider fully integrated
- account_service: add_accounts_with_type() for codex tokens

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The 9router backup format has accessToken directly on connection objects, not nested inside a "data" field. Also fixes extract_all_oauth_tokens. Local test: 10 Codex tokens extracted successfully from a backup file.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
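The fix can be illustrated with a hedged sketch of extract_all_oauth_tokens that accepts both layouts. The top-level "connections" key and the function signature are assumptions about the backup structure; only the accessToken-on-connection vs. nested-"data" distinction comes from the commit message:

```python
import json


def extract_all_oauth_tokens(backup_json: str) -> list[str]:
    """Collect accessToken values from a 9router-style backup.

    Handles both layouts: the token directly on the connection object
    (the actual 9router format, per the fix) and the previously assumed
    layout nested under a "data" field.
    """
    backup = json.loads(backup_json)
    tokens = []
    for conn in backup.get("connections", []):
        token = conn.get("accessToken") or conn.get("data", {}).get("accessToken")
        if token:
            tokens.append(token)
    return tokens
```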
- Fix import-9router endpoint route name (was incorrectly /api/v1/backup)
- Add oc/, cx/, ha-agent, combo models to /v1/models for HA visibility
- Static models: oc/auto, oc/nemotron-3-super-free, cx/auto, ha-agent

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- _handle_opencode_chat now uses route.model (resolved) instead of the raw model
- import-9router-upload uses a Pydantic dict body instead of raw request.body()

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…alling
- 9r/auto proxies to 9router, which handles OAuth for Claude/Codex/Copilot
- ha-agent combo: 9r/auto → cx/auto → oc/auto
- 9router connection configurable via the config.json ninerouter section

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Calls the same endpoint as 9router (not api.openai.com)
- OpenAI chat format → Responses format translation
- Tool-calling support via the Codex Responses API
- No 9router dependency required
- cx/auto routes to Codex OAuth tokens from backup import

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
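A minimal sketch of the chat → Responses translation mentioned above. The exact input-item shape the adapter uses is not shown in the PR; the field names below follow the public OpenAI Responses API conventions and should be treated as assumptions. The stream/store requirements and the gpt-5.3-codex default come from the next commit:

```python
def chat_to_responses(chat_request: dict) -> dict:
    """Translate an OpenAI chat-completions body into a Responses-style body.

    Per the follow-up commit, the Codex endpoint requires stream=true
    and store=false, so both are forced here.
    """
    input_items = [
        {
            "type": "message",
            "role": msg["role"],
            "content": [{"type": "input_text", "text": msg["content"]}],
        }
        for msg in chat_request.get("messages", [])
    ]
    return {
        "model": chat_request.get("model", "gpt-5.3-codex"),
        "input": input_items,
        "stream": True,   # Codex API requirement (forced on)
        "store": False,   # Codex API requirement (forced off)
    }
```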
- Default model: gpt-5.3-codex (works with free ChatGPT accounts)
- Force stream=true and store=false (Codex API requirements)
- Tested: STATUS 200 from chatgpt.com/backend-api/codex/responses

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Add missing 'import json' that caused a 500 on import-9router-upload
- Accept 'codex' as a valid provider (maps to openai_oauth)
- Add codex/ prefix in backend_router

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- _stream_response now yields parsed dict chunks for FastAPI
- FastAPI auto-serializes the dicts to SSE format
- Fixes the "not an AssistantContent" error in HA

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…-agent
- Log the upstream error body from the Codex endpoint
- ha-agent combo: oc/auto first (working), cx/auto as backup

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Codex always returns an SSE stream, even when a non-streaming response is requested. Now collects the stream chunks into a single response dict.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
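Folding a forced SSE stream back into one response might look like the sketch below, assuming OpenAI-style delta chunks and `data:`-prefixed SSE framing. The function name and the chunk shape are illustrative assumptions, not the PR's code:

```python
import json


def collect_sse_chunks(sse_lines) -> dict:
    """Fold OpenAI-style SSE 'data:' lines into one chat-completion dict."""
    text_parts = []
    base = {}
    for line in sse_lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Keep the non-choices envelope (id, model, ...) from the chunks.
        base = {k: v for k, v in chunk.items() if k != "choices"} or base
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                text_parts.append(delta["content"])
    # Replace the streaming choices with a single assembled message.
    base["choices"] = [{
        "index": 0,
        "message": {"role": "assistant", "content": "".join(text_parts)},
        "finish_reason": "stop",
    }]
    return base
```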
…sing

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Codex rejects these params; this mirrors the 9router Codex executor's transformRequest.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…fied multimodal models
Apply card-3d + card-tint-* styling consistently across all pages:
- Dashboard, Models, Accounts, Providers, Combos, Search, Backup
- Settings (11 components), Video, Video Manager
- Changed border from invisible white to a visible thin dark line
- Removed bg-white overrides in favor of pastel tinted backgrounds

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Changed from near-white (rgba with 0.95 opacity) to solid Tailwind 100→200-level colors that are clearly distinguishable from the page background.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Each card within the same page now has a different tint color:
- Models: providers mapped to emerald/amber/violet/indigo
- Providers: each provider gets a unique tint
- Combos: cycle through 6 tints for combo cards
- Settings: 11 components each assigned a distinct tint
- Search: indigo/violet/sky for 3 config cards
- Backup: amber import + cycling row tints
- Video: indigo/violet; Manager: sky/rose
- Accounts: emerald tint

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Filter bar moved above the stat cards, centered layout
- Stat cards now use card-3d for consistent thin dark borders
- New tree view mode groups accounts by provider type (pro/codex/free/...)
- Each group shows an expandable header with a status summary (active/limited/error counts)
- Toggle between List view and Tree view

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- New .card-main class: white gradient + warm shadow for largest cards in 2-3 card layouts
- Refined .card-tint-* gradients: aurora-colored light-cast effect ("bóng", Vietnamese for a soft shadow/glow)
- Softer shadows: multi-layer subtle depth replacing harsh shadows
- Thinner border: rgba(0,0,0,0.08) for cleaner look
- Applied largest-card-white rule: accounts table, search combo, combos add-new, settings config
- Dashboard status bar uses card-main for clean footer
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Removed the standalone filter section above the stat cards
- Search + Type + Status filters now sit right below the action-buttons row
- Fixed Status filter: labels now use i18n translation instead of an empty label prop
- More compact layout: filters h-9, smaller inputs

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Backend:
- _check_gemini_status now scans ALL custom providers, not just "geminiapi"
- Returns per-instance status with port, models, clients, error details
- New /api/v1/provider-tree endpoint: aggregates accounts + providers + custom APIs into a hierarchical tree structure (ChatGPT types, Provider APIs, Custom instances)

Frontend:
- Dashboard: stat cards split into "Gemini API" + "Custom APIs" counts
- Dashboard: status bar dynamically renders all custom provider instances
- Accounts: removed list view; tree view is now the only mode
- Accounts: 3-level tree — Provider category → Sub-type/API → Account detail
- Level 1: ChatGPT (grouped by type: free/pro/codex), Providers, Custom APIs
- Level 2: account types or individual API instances with status indicators
- Level 3: click an account to expand its quota + request stats panel
- Each custom provider instance shows live health-check status

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
When the backend returns the old format without an "instances" array, the expression instances?.filter(...).length throws a TypeError: ?.filter() yields undefined when instances is undefined, and reading .length on undefined crashes. Added the missing ?.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Reduced card padding p-4→p-3, icon circle size-9→size-7
- Smaller fonts: label 11px→10px, value text-2xl→text-xl
- Grid: 2 cols mobile, 3 tablet, 5 desktop (was 6, but too cramped)
- Tighter gap: gap-3→gap-2

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- mergedTree combines groupedAccounts (from filteredAccounts) + providerTree
- ChatGPT groups now built from filteredAccounts directly (supports search/filter)
- Providers/Custom APIs branches still come from /api/v1/provider-tree
- Fixed property mismatch: items vs accounts; added key/label/count fields
- Accounts now visible even if the provider-tree API fails

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…PI fails
- fetchProviderTree now has a fallback: if /api/v1/provider-tree fails, it builds the tree from /api/v1/health (gemini instances) + /api/v1/custom-providers
- Providers branch: shows Gemini API status from the health check
- Custom APIs branch: shows all custom provider instances with status
- Ensures all branches are visible even without an updated backend

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The fallback now handles both old and new health API formats:
- New: health.gemini.instances array
- Old: health.gemini.geminiapi string + geminiapi_port/clients/entries
- Always merges the custom-providers list from config
- Uses seenIds to avoid duplicates between the formats
- Shows Gemini even with "no_key" status

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
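The dual-format handling lives in the frontend JavaScript; as a language-neutral illustration, the same normalization in Python. Field names beyond those the commit message lists (geminiapi, geminiapi_port, instances) are assumptions:

```python
def normalize_gemini_instances(health: dict) -> list[dict]:
    """Normalize old- and new-format health payloads into one instance list.

    New format: health["gemini"]["instances"] is a list of dicts.
    Old format: flat geminiapi / geminiapi_port / ... fields on health["gemini"].
    A seen-keys set (the PR's seenIds) avoids duplicates between formats.
    """
    gemini = health.get("gemini", {}) or {}
    instances, seen = [], set()
    # New format: take instances as-is, deduplicating by port.
    for inst in gemini.get("instances", []) or []:
        key = inst.get("port")
        if key not in seen:
            seen.add(key)
            instances.append(inst)
    # Old flat format: synthesize a single instance entry.
    if "geminiapi" in gemini:
        key = gemini.get("geminiapi_port")
        if key not in seen:
            instances.append({
                "port": key,
                "status": gemini["geminiapi"],  # may be "no_key"; still shown
                "clients": gemini.get("geminiapi_clients"),  # assumed field name
            })
    return instances
```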
- Grid: lg:grid-cols-6 for 6 cards on one row
- Reduced padding p-3→p-2.5, icon circle size-7→size-6, icon 3.5→3
- Value font text-xl→text-lg, gap 2→1.5
- Removed shadow-sm from the icon circle

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Row 1: "Tài khoản" (Accounts) title + count badge
- Row 2: Import, Custom API, Export Token buttons
- Row 3: Refresh, Refresh All, Search, Type filter, Status filter
- Removed the old subtitle and cleaned up the duplicate title in the tree section

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…hboard)
- fetchProviderTree now directly uses /api/v1/health + /api/v1/custom-providers
- Same data sources as the dashboard status bar (health.gemini.instances)
- No dependency on the /api/v1/provider-tree backend endpoint
- Handles both the new format (instances array) and the old format (geminiapi field)
- Always shows the Providers + Custom APIs branches when data is available

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Simplified buildProviderTree to fetch from:
- /api/v1/custom-providers (all custom provider instances)
- /api/v1/providers (built-in providers like opencode, gemini_free, etc.)
Added console.error for debugging issues
Allow automatic detection of the aspect ratio (e.g. 16:9, 1:1, 4:3) from the prompt content, overriding the default configuration (for cases where Home Assistant or other clients send a default size). This lets the size parameter be overridden by keywords found in the user's original request.
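The behavior described above can be sketched as a keyword scan over the prompt. The ratio-to-size table and the function shape below are purely illustrative assumptions; the PR's actual mapping is not shown:

```python
import re

# Assumed ratio -> size table; the real mapping lives in the PR's code/config.
RATIO_SIZES = {
    "16:9": "1792x1024",
    "9:16": "1024x1792",
    "1:1": "1024x1024",
    "4:3": "1280x960",
}


def detect_size(prompt: str, default_size: str = "1024x1024") -> str:
    """Override the default image size with a ratio keyword from the prompt."""
    match = re.search(r"\b(\d{1,2}):(\d{1,2})\b", prompt)
    if match:
        ratio = f"{match.group(1)}:{match.group(2)}"
        if ratio in RATIO_SIZES:
            return RATIO_SIZES[ratio]
    return default_size
```

With this, a prompt like "a city skyline, 16:9" would override the client-supplied default size, while prompts with no ratio keyword keep it.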